
    Effect of Variable Selection Strategy on the Performance of Prognostic Models When Using Multiple Imputation

    BACKGROUND: Variable selection is an important issue when developing prognostic models, and missing data occur frequently in clinical research. Multiple imputation is increasingly used to address missing data. The effect of different variable selection strategies applied to multiply imputed data on the external performance of derived prognostic models has not been well examined. METHODS AND RESULTS: We used backward variable selection with 9 different ways of handling multiply imputed data in a derivation sample to develop logistic regression models for predicting death within 1 year of hospitalization with an acute myocardial infarction. We assessed the prognostic accuracy of each derived model in a temporally distinct validation sample. The derivation and validation samples consisted of 11,524 patients hospitalized between 1999 and 2001 and 7,889 patients hospitalized between 2004 and 2005, respectively. We considered 41 candidate predictor variables. Missing data occurred frequently, with only 13% of patients in the derivation sample and 31% of patients in the validation sample having complete data. Regardless of the significance level for variable selection, the prognostic model developed using only the complete cases in the derivation sample had substantially worse performance in the validation sample than did the models for which variables were selected using the multiply imputed versions of the derivation sample. The other 8 approaches to handling multiply imputed data resulted in prognostic models with performance similar to one another. CONCLUSIONS: Ignoring missing data and using only subjects with complete data can result in the derivation of prognostic models with poor performance. Multiple imputation should be used to account for missing data when developing prognostic models.
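
    The abstract does not spell out how estimates from the multiply imputed datasets are combined, but the standard approach is Rubin's rules. A minimal sketch with hypothetical coefficient estimates for one candidate predictor, showing how pooled estimates could drive a backward-selection decision:

```python
import math

def rubin_pool(estimates, variances):
    """Pool one coefficient across m imputed datasets using Rubin's rules."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled point estimate
    ubar = sum(variances) / m                              # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    total = ubar + (1 + 1 / m) * b                         # Rubin's total variance
    return qbar, total

# Hypothetical log-odds estimates for one predictor across m = 5 imputations
qbar, total = rubin_pool([0.42, 0.39, 0.45, 0.41, 0.40],
                         [0.010, 0.011, 0.009, 0.010, 0.012])
z = qbar / math.sqrt(total)
keep = abs(z) > 1.96  # retain the variable at the 0.05 level
```

    Backward selection would repeat this pooled Wald test for each candidate variable, dropping the least significant variable at each step until all remaining variables meet the chosen significance level.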

    Drug eluting balloons for de novo coronary lesions - a systematic review and meta-analysis

    The role of drug-eluting balloons (DEB) is unclear. Increasing evidence has shown a benefit for the treatment of in-stent restenosis, but their effect on de novo coronary lesions is more controversial, and several smaller randomized trials have found conflicting results.

    Stochastic Modeling for the Expression of a Gene Regulated by Competing Transcription Factors

    It is widely accepted that gene expression regulation is a stochastic event. The common approach to its computer simulation requires detailed information on the interactions of individual molecules, which is often not available for the analysis of biological experiments. As an alternative, we employed a more intuitive Markov-chain model to simulate the experimental results, in which a gene is regulated by activators and repressors that bind the same site in a mutually exclusive manner. Our stochastic simulation in the presence of both activators and repressors predicted a Hill coefficient of the dose-response curve closer to the experimentally observed value than the value calculated from the simple additive effects of activators alone and repressors alone. The simulation also reproduced the heterogeneity of gene expression levels among individual cells observed by Fluorescence Activated Cell Sorting analysis. Therefore, our approach may help extend stochastic simulations to broader experimental data.
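
    The paper's parameters are not given in the abstract, but the mutually exclusive binding scheme it describes can be sketched as a small discrete-time Markov chain over promoter states, with illustrative transition probabilities:

```python
import random

def simulate_promoter(p_act, p_rep, p_off, steps=100_000, seed=1):
    """Discrete-time Markov chain for a promoter site bound by an activator or a
    repressor in a mutually exclusive manner. The transition probabilities are
    illustrative, not the paper's fitted parameters."""
    rng = random.Random(seed)
    state = 'empty'
    counts = {'empty': 0, 'act': 0, 'rep': 0}
    for _ in range(steps):
        if state == 'empty':
            u = rng.random()
            if u < p_act:
                state = 'act'          # activator captures the free site
            elif u < p_act + p_rep:
                state = 'rep'          # repressor captures the free site
        elif rng.random() < p_off:
            state = 'empty'            # bound factor dissociates
        counts[state] += 1
    return {k: v / steps for k, v in counts.items()}

occ = simulate_promoter(p_act=0.3, p_rep=0.1, p_off=0.05)
# occ['act'] approximates the fraction of time the gene is transcriptionally active
```

    Sweeping the activator binding probability and recording the active fraction would trace out a dose-response curve whose steepness could then be compared with a Hill fit.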

    Derivation and validation of a two‐variable index to predict 30‐day outcomes following heart failure hospitalization

    Background The LACE index—length of stay (L), acuity (A), Charlson co-morbidities (C), and emergent visits (E)—predicts 30-day outcomes following heart failure (HF) hospitalization but is complex to score. A simpler LE index (length of stay and emergent visits) could offer a practical advantage in point-of-care risk prediction. Methods and results This was a sub-study of the Patient-Centred Care Transitions in HF (PACT-HF) multicentre trial. The derivation cohort comprised patients hospitalized for HF, enrolled in the trial, and followed prospectively. External validation was performed retrospectively in a cohort of patients hospitalized for HF. We used log-binomial regression models with LACE or LE as the predictor and either 30-day composite all-cause readmission or death, or 30-day all-cause readmission, as the outcome, adjusting only for post-discharge services. There were 1985 patients (mean [SD] age 78.1 [12.1] years) in the derivation cohort and 378 (mean [SD] age 73.1 [13.2] years) in the validation cohort. Increments in the LACE and LE indices were associated with 17% (RR 1.17; 95% CI 1.12, 1.21; C-statistic 0.64) and 21% (RR 1.21; 95% CI 1.15, 1.26; C-statistic 0.63) increases, respectively, in 30-day composite all-cause readmission or death; and 16% (RR 1.16; 95% CI 1.11, 1.20; C-statistic 0.64) and 18% (RR 1.18; 95% CI 1.13, 1.24; C-statistic 0.62) increases, respectively, in 30-day all-cause readmission. The LE index provided better risk discrimination for the 30-day outcomes than did the LACE index in the external validation cohort. Conclusions The LE index predicts 30-day outcomes following HF hospitalization with performance similar to or better than that of the more complex LACE index.
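
    The abstract does not give the LE point assignments, but it does report the per-increment risk ratio from the log-binomial model. A minimal sketch of how that per-point RR scales across a score difference (the patient scores below are hypothetical):

```python
def relative_risk(rr_per_point, score_a, score_b):
    """Relative risk between two patients under a log-binomial model in which
    each one-point index increment multiplies risk by rr_per_point."""
    return rr_per_point ** (score_a - score_b)

# Reported RR of 1.21 per LE-index increment; the two scores are hypothetical
rr = relative_risk(1.21, score_a=6, score_b=3)
```

    Under this model, a patient scoring three points higher carries roughly 1.21³ ≈ 1.77 times the risk of the 30-day composite outcome.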

    Get screened: a pragmatic randomized controlled trial to increase mammography and colorectal cancer screening in a large, safety net practice

    Abstract Background Most randomized controlled trials of interventions designed to promote cancer screening, particularly those targeting poor and minority patients, enroll selected patients. Relatively little is known about the benefits of these interventions among unselected patients. Methods/Design "Get Screened" is an American Cancer Society-sponsored randomized controlled trial designed to promote mammography and colorectal cancer screening in a primary care practice serving low-income patients. Eligible patients who are past due for mammography or colorectal cancer screening are entered into a tracking registry and randomly assigned to early or delayed intervention. This 6-month intervention is multimodal, involving patient prompts, clinician prompts, and outreach. At the time of the patient visit, eligible patients receive a low-literacy patient education tool. At the same time, clinicians receive a prompt to remind them to order the test and, when appropriate, a tool designed to simplify colorectal cancer screening decision-making. Patient outreach consists of personalized letters, automated telephone reminders, assistance with scheduling, and linkage of uninsured patients to the local National Breast and Cervical Cancer Early Detection Program. Interventions are repeated for patients who fail to respond to early interventions. We will compare rates of screening between randomized groups and conduct planned secondary analyses of minority patients and uninsured patients. Data from the pilot phase show that this multimodal intervention triples rates of cancer screening (adjusted odds ratio 3.63; 95% CI 2.35–5.61). Discussion This study protocol is designed to assess a multimodal approach to the promotion of breast and colorectal cancer screening among underserved patients. We hypothesize that a multimodal approach will significantly improve cancer screening rates.
    The trial was registered at ClinicalTrials.gov: NCT00818857.

    Enriching for correct prediction of biological processes using a combination of diverse classifiers

    Background Machine learning models (classifiers) for classifying genes to biological processes each have their own unique characteristics in which genes can be classified and to which biological processes. No single learning model is qualitatively superior to any other, and overall precision for each model tends to be low. The classification results of the classifiers can be complementary and synergistic, suggesting the benefit of a combination of algorithms, but often the prediction probability outputs of different learning models are neither comparable nor compatible for combining. A means to compare outputs regardless of the model and data used, and to combine the results into an improved comprehensive model, is needed. Results Gene expression patterns from NCI's panel of 60 cell lines were used to train a Random Forest, a Support Vector Machine, and a Neural Network model, plus two over-sampled models, for classifying genes to biological processes. Each model produced unique characteristics in the classification results. We introduce the Precision Index (PIN) measure, derived from the maximum posterior probability, which allows assessing, comparing, and combining multiple classifiers. The class-specific precision measure (PIC) is introduced and used to select a subset of predictions across all classes and all classifiers with high precision. We developed a single classifier that combines the PINs from these five models in prediction and found that the PIN Combined Classifier (PINCom) significantly increased the number of correctly predicted genes over any single classifier. The PINCom applied to test genes that were not used in training also showed substantial improvement over any single model. Conclusions This paper introduces novel and effective ways of assessing predictions by their precision and recall, plus a method that combines several machine learning models and capitalizes on synergy and complementation in class selection, resulting in higher precision and recall. Different machine learning models yielded incongruent results, each of which was successfully combined into one superior model using the PIN measure we developed. Validation of the boosted predictions for gene functions showed the genes to be accurately predicted.

    Clinical phenogroups are more effective than left ventricular ejection fraction categories in stratifying heart failure outcomes

    Aims Heart failure (HF) guidelines place patients into three discrete groups according to left ventricular ejection fraction (LVEF): reduced (<40%), mid-range (40–49%), and preserved (≥50%). We assessed whether clinical phenogroups offer better prognostication than LVEF. Methods and results This was a sub-study of the Patient-Centered Care Transitions in HF trial. We analysed baseline characteristics of hospitalized patients in whom LVEF was recorded. We used unsupervised machine learning to identify clinical phenogroups and, thereafter, determined associations between phenogroups and outcomes. The primary outcome was the composite of all-cause death or rehospitalization at 6 and 12 months. The secondary outcome was the composite of cardiovascular death or HF rehospitalization at 6 and 12 months. Cluster analysis of 1693 patients revealed six discrete phenogroups, each characterized by a predominant comorbidity: coronary heart disease, valvular heart disease, atrial fibrillation (AF), sleep apnoea, chronic obstructive pulmonary disease (COPD), or few comorbidities. Phenogroups were LVEF independent, with each phenogroup encompassing a wide range of LVEFs. For the primary composite outcome at 6 months, the hazard ratios (HRs) for phenogroups ranged from 1.25 [95% confidence interval (CI) 1.00–1.58 for AF] to 2.04 (95% CI 1.62–2.57 for COPD) (log-rank P < 0.001); and at 12 months, the HRs for phenogroups ranged from 1.15 (95% CI 0.94–1.41 for AF) to 1.87 (95% CI 1.52–3.20 for COPD) (P < 0.002). LVEF-based classifications did not separate patients into different risk categories for the primary outcomes at 6 months (P = 0.69) and 12 months (P = 0.30). Phenogroups also stratified risk of the secondary composite outcome at 6 and 12 months more effectively than LVEF.
Conclusion Among patients hospitalized for HF, clinical phenotypes generated by unsupervised machine learning provided greater prognostic information for a composite of clinical endpoints at 6 and 12 months compared with LVEF-based categories. Trial Registration: ClinicalTrials.gov Identifier: NCT0211222
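
    The abstract names unsupervised clustering but not the specific algorithm or feature set. A toy k-means sketch over binary comorbidity flags illustrates the general idea of grouping patients by predominant comorbidity; the features and patients below are invented for illustration:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over comorbidity feature vectors -- a toy stand-in for the
    paper's unsupervised phenogrouping (its actual algorithm and feature set
    are not specified in the abstract)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster; keep old center if empty
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Toy binary comorbidity flags: (coronary disease, AF, COPD) for eight patients
pts = [(1, 0, 0), (1, 0, 0), (0, 1, 0), (0, 1, 0),
       (0, 0, 1), (0, 0, 1), (1, 0, 0), (0, 1, 0)]
centers, clusters = kmeans(pts, k=3)
```

    In the study, each resulting cluster would then be entered into a survival model to estimate the hazard ratios reported above.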

    Recommendations for a core outcome set for measuring standing balance in adult populations: a consensus-based approach

    Standing balance is imperative for mobility and avoiding falls. Use of an excessive number of standing balance measures has limited the synthesis of balance intervention data and hampered consistent clinical practice. To develop recommendations for a core outcome set (COS) of standing balance measures for research and practice among adults, we used a combination of scoping reviews, literature appraisal, anonymous voting, and face-to-face meetings with fourteen invited experts from a range of disciplines with international recognition in balance measurement and falls prevention. Consensus was sought over three rounds using pre-established criteria. The scoping review identified 56 existing standing balance measures validated in adult populations with evidence of use in the past five years, and these were considered for inclusion in the COS. Fifteen measures were excluded after the first round of scoring and a further 36 after round two. Five measures were considered in round three. Two measures reached consensus for recommendation, and the expert panel recommended that, at a minimum, either the Berg Balance Scale or the Mini Balance Evaluation Systems Test be used when measuring standing balance in adult populations. Inclusion of two measures in the COS may increase the feasibility of uptake, but poses challenges for data synthesis. Adoption of the standing balance COS does not constitute a comprehensive balance assessment for any population, and users should include additional validated measures as appropriate. The absence of a gold standard for measuring standing balance has contributed to the proliferation of outcome measures. These recommendations represent an important first step towards greater standardization in the assessment and measurement of this critical skill and will inform clinical research and practice internationally.

    Intrinsic noise alters the frequency spectrum of mesoscopic oscillatory chemical reaction systems

    Mesoscopic oscillatory reaction systems, for example in cell biology, can exhibit stochastic oscillations in the form of cyclic random walks even if the corresponding macroscopic system does not oscillate. We study how the intrinsic noise arising from molecular discreteness influences the frequency spectrum of mesoscopic oscillators, using as a model system a cascade of coupled Brusselators away from the Hopf bifurcation. The results show that the spectrum of an oscillator depends on the level of noise: in particular, increasing noise reduces the peak frequency of the oscillator and increases its bandwidth. Along a cascade of coupled oscillators, the peak frequency is further reduced at every stage, and the bandwidth is also reduced. These effects can help in understanding the role of noise in chemical oscillators and provide fingerprints for more reliable parameter identification and volume measurement from experimental spectra.
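
    The intrinsic noise described here comes from simulating the Brusselator's reactions as discrete stochastic events. A minimal Gillespie sketch of a single Brusselator is shown below; the rate parameters and system size are illustrative, not those of the paper's coupled cascade:

```python
import random

def gillespie_brusselator(a, b, omega, t_max, seed=0):
    """Gillespie simulation of a single Brusselator (A -> X, B + X -> Y + D,
    2X + Y -> 3X, X -> E) at system size omega. Parameters below are chosen
    under the Hopf bifurcation (b < 1 + a^2), so the macroscopic system does
    not oscillate but the stochastic trajectory does."""
    rng = random.Random(seed)
    x, y, t = int(a * omega), int(b / a * omega), 0.0   # start at the fixed point
    times, xs = [t], [x]
    while t < t_max:
        props = [a * omega,                      # A -> X
                 b * x,                          # B + X -> Y + D
                 x * (x - 1) * y / omega ** 2,   # 2X + Y -> 3X
                 x]                              # X -> E
        total = sum(props)
        if total == 0:
            break
        t += rng.expovariate(total)              # exponential waiting time
        r = rng.random() * total                 # pick the next reaction
        if r < props[0]:
            x += 1
        elif r < props[0] + props[1]:
            x, y = x - 1, y + 1
        elif r < props[0] + props[1] + props[2]:
            x, y = x + 1, y - 1
        else:
            x -= 1
        times.append(t)
        xs.append(x)
    return times, xs

times, xs = gillespie_brusselator(a=2.0, b=4.0, omega=50, t_max=20.0)
```

    A periodogram of the x(t) series would then estimate the noisy frequency spectrum; repeating this at larger omega (weaker noise) shifts the spectral peak and narrows the bandwidth, the effect the paper quantifies.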